Calibrated probability assessments are subjective probabilities assigned by individuals who have been trained to assess probabilities in a way that historically represents their uncertainty.[1][2] In other words, when a calibrated person says they are "80% confident" in each of 100 predictions they made, they will get about 80% of them correct. Likewise, they will be right 90% of the time they say they are 90% certain, and so on.
Calibration training improves subjective probabilities because most people are either "overconfident" or "under-confident" (usually the former).[3] By practicing with a series of trivia questions, it is possible for subjects to fine-tune their ability to assess probabilities. For example, a subject may be asked to answer a true-or-false trivia question and then state how confident they are that their answer is correct.
If a person has no idea whatsoever, they will say they are only 50% confident. If they are absolutely certain they are correct, they will say 100%. But most people will answer somewhere in between. If a calibrated person is asked a large number of such questions, they will get about as many correct as they expected. On the other hand, an uncalibrated person who is systematically overconfident may say they are 90% confident in their answers to a large number of questions, yet get only 60% or 70% of them correct. Calibration training generally involves taking a battery of such tests. Feedback is provided between tests and the subjects refine their probabilities.
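Checking calibration over a batch of such questions amounts to grouping answers by stated confidence and comparing each group's stated confidence with its actual hit rate. A minimal sketch in Python (the answer data here are invented for illustration):

```python
from collections import defaultdict

def calibration_report(answers):
    """answers: list of (stated_confidence, was_correct) pairs.
    Returns, for each stated confidence level, the actual hit rate
    and the number of questions answered at that level."""
    groups = defaultdict(list)
    for conf, correct in answers:
        groups[conf].append(correct)
    return {conf: (sum(results) / len(results), len(results))
            for conf, results in sorted(groups.items())}

# Invented example: a subject who says "90%" but is right only 60% of the time
answers = ([(0.9, True)] * 6 + [(0.9, False)] * 4
           + [(0.5, True)] * 5 + [(0.5, False)] * 5)
for conf, (hit_rate, n) in calibration_report(answers).items():
    print(f"stated {conf:.0%}: correct {hit_rate:.0%} over {n} questions")
```

A well-calibrated subject's hit rates would roughly match the stated confidences; the subject above is overconfident at the 90% level.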
Calibration training may also involve learning other techniques that help to compensate for consistent over- or under-confidence. Since subjects are better at placing odds when they pretend to bet money,[4] subjects are taught how to convert calibration questions into a type of betting game, which has been shown to improve their subjective probabilities.[5] Various collaborative methods have been developed, such as prediction markets, so that subjective estimates from multiple individuals can be taken into account.
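One simple way to combine estimates from multiple individuals (a much cruder stand-in for a prediction market, offered only as an illustrative aggregation rule) is to average their probabilities in log-odds space:

```python
import math

def pool_log_odds(probs):
    """Combine several subjective probabilities for the same event
    by averaging their log-odds, then converting back to a probability.
    This is one simple pooling rule; prediction markets aggregate
    beliefs through trading rather than a fixed formula."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean))

# Three individuals' estimates of the same event
print(pool_log_odds([0.6, 0.7, 0.8]))
```

Log-odds pooling has the convenient property that a group of identical estimates pools to that same estimate, and extreme confidences pull the pooled value harder than averaging raw probabilities would.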
Stochastic modeling methods such as the Monte Carlo method often use subjective estimates from "subject matter experts". However, since research shows that such experts are very likely to be statistically overconfident, the model will tend to underestimate uncertainty and risk. The Applied Information Economics method systematically uses calibration training as part of a decision modeling process.
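To see why expert overconfidence makes a Monte Carlo model understate risk, compare simulations driven by a too-narrow distribution against a wider, more realistic one. The quantities below (a cost modeled as normal, a budget threshold, and the specific parameters) are invented purely for illustration:

```python
import random

random.seed(42)
N = 100_000  # number of Monte Carlo trials

def prob_exceeds(mean, sd, threshold):
    """Estimate P(X > threshold) for X ~ Normal(mean, sd) by simulation."""
    exceed = sum(1 for _ in range(N) if random.gauss(mean, sd) > threshold)
    return exceed / N

# Invented example: the true cost uncertainty is Normal(100, 30), but an
# overconfident expert gives a range implying a standard deviation of only 15.
true_risk = prob_exceeds(100, 30, 150)     # chance the cost exceeds a budget of 150
modeled_risk = prob_exceeds(100, 15, 150)  # risk implied by the too-narrow range
print(f"true P(cost > 150): {true_risk:.3f}, modeled: {modeled_risk:.3f}")
```

The overconfident input yields a far smaller tail probability, so the model reports almost no chance of a budget overrun that is in fact quite plausible.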